Add cumulative sum tensor operation #1722

Open · wants to merge 11 commits into base: main

Conversation

allenqm commented May 3, 2024

Pull Request Template

Starting a draft PR to align on a few things with maintainers as I dive into this.

Context: Per this convo, I wanted to add a cumulative product operation to burn.

My plan is to start with a cumulative sum operation. Then cumulative product can be developed using cumulative sum, log, and exp.

@nathanielsimard, Items to align on upfront:

  • The name of the function should be cumsum_dim: cumsum aligns with the PyTorch API, and in Burn, operations that take an explicit dim argument seem to carry a _dim suffix. Alternatively, we could drop the suffix.
  • The function should be implemented for the Float and Int tensor kinds, but not Bool.
  • Backends: While the implementations for tch, candle, and ndarray seem straightforward, I have questions about jit. For jit, I cannot find an existing WGSL implementation for cumulative sum. Is the right approach in this situation to create a WGSL compute shader for it? Cumulative sum is probably hard to do well on GPUs given the dependencies between elements, but I'm open to trying. I'm new to WGSL.

Checklist

  • Confirmed that run-checks all script has been executed.
  • Made sure the book is up to date with changes in this PR.

Related Issues/PRs

Provide links to relevant issues and dependent PRs.

Changes

Summarize the problem being addressed and your solution.

Testing

Describe how these changes have been tested.

louisfd (Member) commented May 6, 2024

Hi @allenqm
Although you tagged Nathaniel, I think I can answer in his stead:

  • It's great that you made a default implementation for cumprod using log and exp 👍

  • For the naming, I think we can leave the _dim out because there cannot be a version for all dims at once. That's what we did for instance with sort, which acts on a dimension but for which no "global sort" exists.

  • Indeed, it must be implemented for int and float only; I saw you put it in numeric, which is the way to go 👍

  • For JIT, it is going to be a pain at the moment. There is no handwritten WGSL code anymore; the WGSL is always auto-generated from the intermediate JIT representation (kernels written with the gpu! macro). It's honestly a pain to work with and isn't designed for new contributors to learn. I'm working on a language to rewrite them in an accessible way, see CubeCL: Compute Language Extension in Rust for Multi-target GPU kernels #1665, but it's not ready.

  • For the GPU algorithm, you're right that the dependency between elements will make it difficult to write an efficient kernel. The straightforward way would be to spawn one thread for the whole dim to sum, and have that thread fill all the output slots while accumulating the inputs in a local sum variable (a rough sketch of this approach follows below). But for a large dim to sum it can be slow. I'm not sure if there are better solutions; I haven't done any research.

I'm willing to write that kernel in the JIT intermediate representation if you want, so the operation becomes available soon; then we can optimize it later and with the upcoming language.
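
To make that strategy concrete, here is a plain-Rust sketch of the access pattern described in the last bullet above: one worker per row of the summed dimension, accumulating sequentially into a local variable. This is only an illustration of the algorithm, not gpu!/JIT kernel code; the function name and the flattened row-major layout are assumptions made for the example.

```rust
// Illustration only: the per-row sequential scan described above, written as plain Rust
// over a flattened row-major buffer. A real kernel would run one such scan per thread.
fn cumsum_rows(input: &[f32], output: &mut [f32], row_len: usize) {
    assert_eq!(input.len(), output.len());
    assert_eq!(input.len() % row_len, 0);
    // One "thread" per row: rows are independent, so they could be scanned in parallel,
    // but each row itself is processed sequentially because of the element dependencies.
    for (in_row, out_row) in input.chunks(row_len).zip(output.chunks_mut(row_len)) {
        let mut acc = 0.0_f32; // the local sum variable mentioned above
        for (x, y) in in_row.iter().zip(out_row.iter_mut()) {
            acc += x;
            *y = acc;
        }
    }
}

fn main() {
    let input = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0];
    let mut output = [0.0_f32; 6];
    cumsum_rows(&input, &mut output, 3);
    assert_eq!(output, [1.0, 3.0, 6.0, 4.0, 9.0, 15.0]);
}
```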

allenqm (Author) commented May 6, 2024

@louisfd Thanks so much for the guidance.

I will remove the _dim suffix.

Thanks for offering to step in and write the kernel in the JIT intermediate representation. I'll take you up on that.

I'm going to try and get the tch, candle, ndarray, and autodiff implementations done by EoD tomorrow.

Just to be clear: I haven't written anything specific for cumprod yet. I was proposing that if we implement cumsum, then cumprod becomes more straightforward, since it can be expressed without new backend implementations (with the exception of autodiff) using the existing implementations of cumsum, exp, and log. Let me know if my assessment here seems off.
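
To illustrate that idea, here is a minimal sketch at the tensor level. It assumes the cumsum(dim) method this PR proposes (it does not exist in Burn yet, so this will not compile against current Burn), and it only holds for strictly positive inputs, since log is undefined otherwise; signs and zeros would need separate handling.

```rust
use burn::tensor::{backend::Backend, Tensor};

// Sketch only: `cumsum` is the operation proposed in this PR, not an existing Burn method.
// For strictly positive x: log turns products into sums, cumsum accumulates them,
// and exp maps the result back, i.e. cumprod(x) = exp(cumsum(log(x))).
fn cumprod_via_cumsum<B: Backend, const D: usize>(
    tensor: Tensor<B, D>,
    dim: usize,
) -> Tensor<B, D> {
    tensor.log().cumsum(dim).exp()
}
```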

tensor: NdArrayTensor<E, D>,
dim: usize,
) -> NdArrayTensor<E, D> {
let mut array = tensor.array.clone().into_owned();
allenqm (Author) commented:

I believe the underlying array struct of tensor needs to be cloned, since NdArray's method for accumulating elements along an axis modifies the array's data in place (referring to this method).

laggui (Member) replied:

Well float_cumsum takes ownership of the tensor, so I don't think the clone is required here.
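
For reference, here is a small standalone ndarray example of the in-place accumulation pattern being discussed (plain ndarray rather than Burn's NdArrayTensor wrapper; the data values are made up). Once the function owns the array, no extra clone is needed before accumulating.

```rust
use ndarray::{array, Axis};

fn main() {
    // Owning the array is enough: accumulate_axis_inplace mutates it directly.
    let mut a = array![[1.0_f32, 2.0, 3.0], [4.0, 5.0, 6.0]];
    // Cumulative sum along axis 1: each element gets the previous element along the axis added in.
    a.accumulate_axis_inplace(Axis(1), |&prev, cur| *cur += prev);
    assert_eq!(a, array![[1.0_f32, 3.0, 6.0], [4.0, 9.0, 15.0]]);
}
```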

allenqm (Author) commented May 7, 2024

tch, candle, ndarray, autodiff + tests, and tensor tests have been added. Going to work on the onnx section of the contributor book next.

no action needed, just fyi @louisfd

github-actions bot commented Jun 8, 2024: This PR has been marked as stale because it has not been updated for over a month.
allenqm (Author) commented Jun 25, 2024

Sorry for not flipping this to "Ready for Review", @louisfd. I think I've got the required ONNX files in place. Can you take a look?

allenqm marked this pull request as ready for review on June 25, 2024.
laggui (Member) left a comment

Just some minor comments, but overall great job 👏

Comment on lines +17 to +18
FloatCumsum,
IntCumsum,
CumSum is a single operator, the same for int and float.

So we should also only have one node type, BinaryType::Cumsum. See for example BinaryType::Sub, which does split the int and float tests for the generated onnx files but it's still a single operation/node.

@@ -770,6 +770,18 @@ pub trait IntTensorOps<B: Backend> {
/// The sum of all elements in the tensor along the dimension.
fn int_sum_dim<const D: usize>(tensor: IntTensor<B, D>, dim: usize) -> IntTensor<B, D>;

/// Cumulative Sum of all elements in a tensor along a dimension.
Let's keep the capitalization at "Cumulative sum"

@@ -842,6 +842,19 @@ pub trait FloatTensorOps<B: Backend> {
/// A tensor with the sum of all elements in `tensor` along `dim`.
fn float_sum_dim<const D: usize>(tensor: FloatTensor<B, D>, dim: usize) -> FloatTensor<B, D>;

/// Cumulative Sum of all elements in a tensor along a dimension.
Same thing regarding capitalization

Comment on lines +806 to +821
/// Checks running dimension such as cumulative sum
pub(crate) fn running_dim<const D: usize>(ops: &str, dim: usize) -> Self {
let mut check = Self::Ok;

if dim > D {
check = check.register(
ops,
TensorError::new(format!(
"Can't perform a running calculation on a tensor with ({D}) dimensions on axis ({dim})"
)),
);
}

check
}

You could use the existing TensorCheck::dim_ops instead

Comment on lines +787 to +795
match &lhs {
Type::Tensor(x) => match x.kind {
TensorKind::Int => BinaryNode::int_cumsum(lhs, rhs, output),
TensorKind::Float => BinaryNode::float_cumsum(lhs, rhs, output),
_ => panic!("cumsum function requires LHS to be int or float type"),
},
_ => panic!("cumsum function only supports LHS tensor type"),
}
}
By making cumsum a single node it should simplify this block

tensor: NdArrayTensor<i64, D>,
dim: usize,
) -> NdArrayTensor<i64, D> {
let mut array = tensor.array.clone().into_owned();
See comment for float_cumsum

antimora requested review from laggui and removed the review request for ashdtu on June 27, 2024.
github-actions bot commented Jul 28, 2024: This PR has been marked as stale because it has not been updated for over a month.
antimora added the "help wanted" label on Aug 4, 2024.
antimora (Collaborator) commented Aug 4, 2024

Labeled this PR as needing help to complete it. It would be a great feature to have in Burn.

allenqm (Author) commented Aug 5, 2024 via email

antimora mentioned this pull request on Aug 5, 2024.
github-actions bot commented Sep 4, 2024: This PR has been marked as stale because it has not been updated for over a month.